Some Applications of Coding Theory in Computational Complexity
Error-correcting codes and related combinatorial constructs play an important
role in several recent (and old) results in computational complexity theory. In
this paper we survey results on locally-testable and locally-decodable
error-correcting codes, and their applications to complexity theory and to
cryptography.
Locally decodable codes are error-correcting codes with sub-linear time
error-correcting algorithms. They are related to private information retrieval
(a type of cryptographic protocol), and they are used in average-case
complexity and to construct ``hard-core predicates'' for one-way permutations.
Locally testable codes are error-correcting codes with sub-linear time
error-detection algorithms, and they are the combinatorial core of
probabilistically checkable proofs.
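The Hadamard code gives a minimal, self-contained illustration of local decodability (a standard textbook example, not necessarily the constructions emphasized in the survey): any message bit can be recovered from a corrupted codeword with just two queries. A sketch in Python, assuming a message short enough that the exponential-length codeword fits in memory:

```python
import random

def hadamard_encode(x):
    """Hadamard code: position a of the codeword holds the parity <x, a> mod 2."""
    mask = sum(bit << j for j, bit in enumerate(x))
    return [bin(a & mask).count("1") % 2 for a in range(2 ** len(x))]

def local_decode_bit(word, n, i, trials=51):
    """Two-query local decoding of bit i: for an uncorrupted codeword f,
    f(a) XOR f(a ^ e_i) = x_i for every a; a majority vote over random a
    tolerates a small fraction of corrupted positions."""
    votes = sum(word[a] ^ word[a ^ (1 << i)]
                for a in (random.randrange(2 ** n) for _ in range(trials)))
    return int(votes > trials // 2)

random.seed(1)
x = [1, 0, 1, 1]
word = hadamard_encode(x)
word[5] ^= 1  # corrupt one of the 16 codeword positions
decoded = [local_decode_bit(word, len(x), i) for i in range(len(x))]
print(decoded)  # recovers x despite the corruption
```

Each of the two queried positions is uniformly distributed, so a single trial is wrong with probability at most twice the corruption rate; this is the sub-linear (here, constant) query behavior the abstract refers to.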
Inapproximability of Combinatorial Optimization Problems
We survey results on the hardness of approximating combinatorial optimization
problems.
An Alon-Boppana Type Bound for Weighted Graphs and Lowerbounds for Spectral Sparsification
We prove the following Alon-Boppana type theorem for general (not necessarily
regular) weighted graphs: if G is an n-node weighted undirected graph of
average combinatorial degree d (that is, G has dn/2 edges) and girth g, and if
\lambda_1 <= \lambda_2 <= ... <= \lambda_n are the eigenvalues of the
(non-normalized) Laplacian of G, then, provided the girth is large enough in
terms of d, \lambda_n / \lambda_2 >= 1 + \Omega(1/\sqrt{d}). (The Alon-Boppana
theorem implies that if G is unweighted and d-regular, then \lambda_n /
\lambda_2 >= 1 + \Omega(1/\sqrt{d}) if the diameter is sufficiently large.)
Our result implies a lower bound for spectral sparsifiers. A graph H is a
spectral \epsilon-sparsifier of a graph G if (1-\epsilon) L_G \preceq L_H
\preceq (1+\epsilon) L_G, where L_G is the Laplacian matrix of G and L_H is
the Laplacian matrix of H. Batson, Spielman and Srivastava proved that for
every G there is an \epsilon-sparsifier H of average degree d where \epsilon =
O(1/\sqrt{d}) and the edges of H are a (weighted) subset of the edges of G.
Batson, Spielman and Srivastava also showed that the bound on \epsilon cannot
be reduced below \Omega(1/\sqrt{d}) when G is a clique; our Alon-Boppana-type
result implies that \epsilon cannot be reduced below \Omega(1/\sqrt{d}) when G
comes from a family of expanders of super-constant degree and super-constant
girth.
The method of Batson, Spielman and Srivastava proves a more general result
about sparsifying sums of rank-one matrices, and their method applies to an
"online" setting. We show that for the online matrix setting their bound is
tight, up to lower order terms.
Average-Case Complexity
We survey the average-case complexity of problems in NP.
We discuss various notions of good-on-average algorithms, and present
completeness results due to Impagliazzo and Levin. Such completeness results
establish the fact that if a certain specific (but somewhat artificial) NP
problem is easy-on-average with respect to the uniform distribution, then all
problems in NP are easy-on-average with respect to all samplable distributions.
Applying the theory to natural distributional problems remains an outstanding
open question. We review some natural distributional problems whose
average-case complexity is of particular interest and that do not yet fit into
this theory.
A major open question is whether the existence of hard-on-average problems in
NP can be based on the P \neq NP assumption or on related worst-case
assumptions.
We review negative results showing that certain proof techniques cannot prove
such a result. While the relation between worst-case and average-case
complexity for general NP problems remains open, there has been progress in
understanding the relation between different ``degrees'' of average-case
complexity. We discuss some of these ``hardness amplification'' results.
Approximating the Expansion Profile and Almost Optimal Local Graph Clustering
Spectral partitioning is a simple, nearly-linear time, algorithm to find
sparse cuts, and the Cheeger inequalities provide a worst-case guarantee for
the quality of the approximation found by the algorithm. Local graph
partitioning algorithms [ST08,ACL06,AP09] run in time that is nearly linear in
the size of the output set, and their approximation guarantee is worse than the
guarantee provided by the Cheeger inequalities by a polylogarithmic
factor. It has been a long standing open problem to design
a local graph clustering algorithm with an approximation guarantee close to the
guarantee of the Cheeger inequalities and with a running time nearly linear in
the size of the output.
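For reference, the spectral partitioning algorithm mentioned above can be sketched in a few lines: embed the vertices using an eigenvector of the second-smallest eigenvalue of the normalized Laplacian, then return the best "sweep cut" by conductance. A minimal dense-matrix sketch in Python; the toy graph (two triangles joined by a bridge) is my illustrative choice:

```python
import numpy as np

def sweep_cut(adj):
    """Spectral partitioning: order vertices by the (degree-normalized) second
    eigenvector of the normalized Laplacian, return the prefix set of minimum
    conductance phi(S) = cut(S, V-S) / min(vol(S), vol(V-S))."""
    n = len(adj)
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(lap)                # eigenvalues ascending
    order = np.argsort(d_inv_sqrt * eigvecs[:, 1])  # embedding D^{-1/2} v_2
    vol_total = deg.sum()
    in_set = np.zeros(n, dtype=bool)
    members, cut, vol = [], 0.0, 0.0
    best_set, best_phi = None, float("inf")
    for v in order[:-1]:  # proper nonempty prefixes of the sweep order
        cut += adj[v, ~in_set].sum() - adj[v, in_set].sum()
        in_set[v] = True
        vol += deg[v]
        members.append(int(v))
        phi = cut / min(vol, vol_total - vol)
        if phi < best_phi:
            best_phi, best_set = phi, list(members)
    return best_set, best_phi

# Two triangles joined by a bridge: the sweep finds one triangle,
# whose conductance is 1/7 (one crossing edge, volume 7).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = np.zeros((6, 6))
for a, b in edges:
    adj[a, b] = adj[b, a] = 1.0
s, phi = sweep_cut(adj)
print(sorted(s), phi)
```

The Cheeger inequalities guarantee that the cut found this way has conductance at most about the square root of the optimum; note the sweep scans the whole graph, which is exactly the global cost that local clustering algorithms try to avoid.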
In this paper we solve this problem; we design an algorithm with the same
guarantee (up to a constant factor) as the Cheeger inequality, that runs in
time slightly super-linear in the size of the output. This is the first
sublinear (in the size of the input) time algorithm with almost the same
guarantee as the Cheeger inequality. As a byproduct of our results, we obtain
a bicriteria approximation algorithm for the expansion profile of any graph.
There is a polynomial time algorithm that, given a measure bound gamma, finds
a set whose measure and expansion are within bounded factors of the best
expansion achievable by sets of measure at most gamma. Our proof techniques
also provide a simpler proof of the structural result of Arora, Barak and
Steurer [ABS10], one that can be applied to irregular graphs.
Our main technical tool is the following: for any set S of vertices of a
graph, a lazy t-step random walk started from a randomly chosen vertex of S
will remain entirely inside S with probability at least (1 - \phi(S)/2)^t.
This itself provides a new lower bound on the uniform mixing time of any
finite-state reversible Markov chain.
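The random-walk claim is easy to sanity-check empirically. The sketch below is a toy experiment (the two-clique graph, the parameters, and the uniform starting vertex within S are my simplifying assumptions): it runs a lazy random walk from inside a low-conductance set and estimates the probability of never leaving it.

```python
import random

def lazy_walk_stays(adj_list, start, steps, inside):
    """Lazy random walk: stay put w.p. 1/2, else move to a uniform neighbor.
    Returns True iff the walk remained inside the given set the whole time."""
    v = start
    for _ in range(steps):
        if random.random() < 0.5:
            v = random.choice(adj_list[v])
        if v not in inside:
            return False
    return True

# Toy graph: two 5-cliques joined by a single edge (4 -- 5).
adj_list = {v: [] for v in range(10)}
for a in range(5):
    for b in range(a + 1, 5):
        adj_list[a].append(b); adj_list[b].append(a)
        adj_list[a + 5].append(b + 5); adj_list[b + 5].append(a + 5)
adj_list[4].append(5); adj_list[5].append(4)

S = set(range(5))  # one clique; low conductance (a single crossing edge)
random.seed(0)
trials = 2000
stays = sum(lazy_walk_stays(adj_list, random.choice(sorted(S)), 5, S)
            for _ in range(trials))
print(stays / trials)  # close to 1, since phi(S) is small
```

Because only one vertex of S has an edge leaving it, the per-step escape probability is tiny, and the empirical stay probability is far above 1/2, consistent with a lower bound of the form (1 - \phi(S)/2)^t.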
A New Regularity Lemma and Faster Approximation Algorithms for Low Threshold Rank Graphs
Kolla and Tulsiani [KT07,Kolla11] and Arora, Barak and Steurer [ABS10]
introduced the technique of subspace enumeration, which gives approximation
algorithms for graph problems such as unique games and small set expansion; the
running time of such algorithms is exponential in the threshold-rank of the
graph.
Guruswami and Sinop [GS11,GS12], and Barak, Raghavendra, and Steurer [BRS11]
developed an alternative approach to the design of approximation algorithms for
graphs of bounded threshold-rank, based on semidefinite programming relaxations
in the Lasserre hierarchy and on novel rounding techniques. These algorithms are
faster than the ones based on subspace enumeration and work on a broad class of
problems.
In this paper we develop a third approach to the design of such algorithms.
We show, constructively, that graphs of bounded threshold-rank satisfy a weak
Szemeredi regularity lemma analogous to the one proved by Frieze and Kannan
[FK99] for dense graphs. The existence of efficient approximation algorithms is
then a consequence of the regularity lemma, as shown by Frieze and Kannan.
Applying our method to the Max Cut problem, we devise an algorithm that is
faster than all previous algorithms and is easier to describe and analyze.
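As a point of comparison for Max Cut guarantees, the classical local-search baseline (not the regularity-based algorithm of this abstract) already cuts at least half of the edges: while some vertex has more uncut than cut incident edges, flip its side. A sketch:

```python
def local_search_max_cut(n, edges):
    """Greedy local search for Max Cut: flip any vertex whose move increases
    the cut. At a local optimum, every vertex has at least half of its
    incident edges cut, so the cut has at least |E|/2 edges."""
    side = [0] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # gain from flipping v = (uncut incident edges) - (cut incident edges)
            gain = sum((1 if side[a] == side[b] else -1)
                       for a, b in edges if v in (a, b))
            if gain > 0:
                side[v] = 1 - side[v]
                improved = True
    cut = sum(1 for a, b in edges if side[a] != side[b])
    return side, cut

# 5-cycle: the optimum cut is 4 edges; the |E|/2 guarantee only promises 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
side, cut = local_search_max_cut(5, edges)
print(cut)  # → 4
```

This runs in polynomial time but each improvement step touches the whole edge list; the point of the regularity-lemma approach is to approximate such cuts much faster on low threshold-rank graphs.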
Partitioning into Expanders
Let G=(V,E) be an undirected graph and let lambda_k be the k-th smallest
eigenvalue of the normalized Laplacian matrix of G. It is a basic fact in
algebraic graph theory that lambda_k > 0 if and only if G has at most k-1
connected components. We prove a robust version of this fact. If lambda_k > 0,
then for some 1 <= l <= k-1, V can be partitioned into l sets P_1,...,P_l
such that each P_i is a low-conductance set in G and induces a high-conductance
subgraph. In particular, \phi(P_i) = O(l^3 \sqrt{\lambda_l}) and
\phi(G[P_i]) >= \lambda_k / k^2.
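The basic fact above is easy to verify numerically: for a graph with exactly c connected components, the c smallest eigenvalues of the normalized Laplacian are 0 and the (c+1)-st is positive. A small check in Python (numpy, dense matrices, toy graph of my choosing):

```python
import numpy as np

def normalized_laplacian_eigs(edges, n):
    """Eigenvalues (ascending) of I - D^{-1/2} A D^{-1/2} for a simple graph
    with no isolated vertices."""
    adj = np.zeros((n, n))
    for a, b in edges:
        adj[a, b] = adj[b, a] = 1.0
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return np.linalg.eigvalsh(lap)

# Two disjoint triangles: 2 connected components, so
# lambda_1 = lambda_2 = 0 while lambda_3 > 0 -- i.e. lambda_k > 0
# exactly when the graph has at most k-1 components (here k = 3).
eigs = normalized_laplacian_eigs([(0, 1), (0, 2), (1, 2),
                                  (3, 4), (3, 5), (4, 5)], 6)
print(np.round(eigs, 6))
```

The robust version in the abstract replaces "lambda_k > 0" and "connected component" by quantitative conductance bounds.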
We make our results algorithmic by designing a simple polynomial time
spectral algorithm to find such a partitioning of G with a quadratic loss in
the inside conductance of the P_i's. Unlike the recent results on higher-order
Cheeger inequalities [LOT12,LRTV12], our algorithmic results do not use higher
order eigenfunctions of G. If there is a sufficiently large gap between
lambda_k and lambda_{k+1}, more precisely, if \lambda_{k+1} >= \poly(k)
\lambda_k^{1/4}, then our algorithm finds a k-partitioning of V into sets
P_1,...,P_k such that each induced subgraph G[P_i] has a significantly larger
conductance than the conductance of P_i in G. Such a partitioning may represent
the best k-clustering of G. Our algorithm is a simple local search that only
uses the Spectral Partitioning algorithm as a subroutine. We expect to see
further applications of this simple algorithm in clustering problems.
Approximation of non-boolean 2CSP
We develop a polynomial time approximation algorithm for Max 2CSP-k, the
problem where we are given a collection of constraints, each involving two
variables, where each variable ranges over a set of size k, and we want to
find an assignment to the variables that maximizes the number of satisfied
constraints. Assuming the Unique Games Conjecture, our approximation ratio is
the best possible up to constant factors.
Previously, a weaker approximation algorithm was known, based on linear
programming. Our algorithm is based on semidefinite programming (SDP) and on a
novel rounding technique. The SDP that we use has an almost-matching
integrality gap.
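For scale, the trivial baseline for this problem: a uniformly random assignment satisfies each satisfiable binary constraint over a domain of size k with probability at least 1/k^2, so its expected value is within a k^2 factor of optimal; the LP- and SDP-based algorithms improve substantially on this. A sketch that computes the expectation exactly on a toy instance (the instance itself is made up for illustration):

```python
from itertools import product

def expected_random(constraints, k):
    """Exact expected number of constraints satisfied by a uniformly random
    assignment: constraint (u, v, rel) is satisfied w.p. |rel| / k^2."""
    return sum(len(rel) / (k * k) for _, _, rel in constraints)

def brute_force_opt(constraints, n, k):
    """Optimal value by exhaustive search over all k^n assignments."""
    best = 0
    for asg in product(range(k), repeat=n):
        best = max(best, sum((asg[u], asg[v]) in rel
                             for u, v, rel in constraints))
    return best

# Toy Max 2CSP-k instance with k = 3 and n = 3 variables.
k, n = 3, 3
constraints = [
    (0, 1, {(0, 0), (1, 2)}),                       # allowed pairs for (x0, x1)
    (1, 2, {(a, a) for a in range(3)}),             # equality: x1 = x2
    (0, 2, {(a, (a + 1) % 3) for a in range(3)}),   # shift: x2 = x0 + 1 mod 3
]
opt = brute_force_opt(constraints, n, k)
exp = expected_random(constraints, k)
# exp >= opt / k^2: the k^2-approximation guarantee of the random baseline.
print(opt, round(exp, 3))  # → 3 0.889
```

Here the assignment x0 = 1, x1 = 2, x2 = 2 satisfies all three constraints, while the random baseline satisfies 8/9 of a constraint in expectation, comfortably above opt / k^2 = 1/3.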
- …